Renewable energy sources play an increasingly important role in the global energy mix as efforts to reduce the environmental impact of energy production intensify.
Among renewable energy alternatives, wind energy is one of the most developed technologies worldwide. The U.S. Department of Energy has put together a guide to achieving operational efficiency using predictive maintenance practices.
Predictive maintenance uses sensor information and analysis methods to measure and predict degradation and future component capability. The idea behind predictive maintenance is that failure patterns are predictable: if component failure can be predicted accurately and the component is replaced before it fails, operation and maintenance costs will be much lower.
The sensors fitted across different machines involved in the process of energy generation collect data related to various environmental factors (temperature, humidity, wind speed, etc.) and additional features related to various parts of the wind turbine (gearbox, tower, blades, brakes, etc.).
“ReneWind” is a company working on improving the machinery and processes involved in wind energy production using machine learning, and it has collected generator-failure data from wind turbine sensors. They have shared a ciphered version of the data, since the data collected through sensors is confidential (the type of data collected varies between companies). The data has 40 predictors, with 20,000 observations in the training set and 5,000 in the test set.
The objective is to build various classification models, tune them, and find the best one to help identify failures so that generators can be repaired before they fail or break, reducing the overall maintenance cost. The nature of predictions made by the classification model will translate as follows:
It is given that the cost of repairing a generator is much less than the cost of replacing it, and the cost of inspection is less than the cost of repair.
“1” in the target variable represents “failure” and “0” represents “no failure”.
The data provided is a transformed version of the original data which was collected using sensors.
Both the datasets consist of 40 predictor variables and 1 target variable.
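The cost hierarchy above (inspection < repair < replacement) is what makes missed failures so expensive. A rough sketch of the implied trade-off, using purely hypothetical cost figures (the data specifies only the ordering of the costs, not their values):

```python
# Hypothetical cost figures for illustration only; the actual costs are not given.
COST_REPLACE = 40_000  # generator fails in service (false negative)
COST_REPAIR = 15_000   # failure predicted correctly and fixed in time (true positive)
COST_INSPECT = 1_000   # failure predicted but none present (false positive)

def maintenance_cost(true_positives, false_negatives, false_positives):
    """Total maintenance cost implied by a model's confusion-matrix counts."""
    return (
        true_positives * COST_REPAIR
        + false_negatives * COST_REPLACE
        + false_positives * COST_INSPECT
    )

# A model that catches more failures (higher recall) can be cheaper overall,
# even at the price of many extra inspections:
low_recall_cost = maintenance_cost(true_positives=50, false_negatives=50, false_positives=20)
high_recall_cost = maintenance_cost(true_positives=90, false_negatives=10, false_positives=120)
print(low_recall_cost, high_recall_cost)  # 2770000 1870000
```

Under these assumed costs, trading 100 extra inspections for 40 avoided replacements cuts the total bill by roughly a third, which is why recall on the failure class matters most here.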
# Installing the libraries with the specified version
!pip install --no-deps tensorflow==2.18.0 scikit-learn==1.3.2 matplotlib==3.8.3 seaborn==0.13.2 numpy==1.26.4 pandas==2.2.2 -q --user --no-warn-script-location
# Library for data manipulation and analysis.
import pandas as pd
# Fundamental package for scientific computing.
import numpy as np
# Splitting datasets into training and testing sets.
from sklearn.model_selection import train_test_split
#Imports tools for data preprocessing including label encoding, one-hot encoding, and standard scaling
from sklearn.preprocessing import LabelEncoder, OneHotEncoder,StandardScaler
#Imports a class for imputing missing values in datasets.
from sklearn.impute import SimpleImputer
# Imports the Matplotlib library for creating visualizations.
import matplotlib.pyplot as plt
# Imports the Seaborn library for statistical data visualization.
import seaborn as sns
# Time related functions.
import time
#Imports functions for evaluating the performance of machine learning models
from sklearn.metrics import confusion_matrix, f1_score,accuracy_score, recall_score, precision_score, classification_report
# Imports the metrics module from scikit-learn
from sklearn import metrics
# Import TensorFlow as tf
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Input, Dropout, BatchNormalization
from tensorflow.keras.optimizers import Adam
from tensorflow.keras import backend as K
# to suppress unnecessary warnings
import warnings
warnings.filterwarnings("ignore")
Note: the following cell is needed only when running on Google Colab.
# Mounting Google Drive (Google Colab only)
from google.colab import drive
drive.mount('/content/drive')
Mounted at /content/drive
# Read dataset for Train and Test from drive
train_data=pd.read_csv("/content/drive/MyDrive/Colab Notebooks/Neural Network/Train.csv")
test_data=pd.read_csv("/content/drive/MyDrive/Colab Notebooks/Neural Network/Test.csv")
# Keep the copy of original dataset to avoid modifying the original dataframe
df_train_Org=train_data.copy()
df_test_Org=test_data.copy()
# Checking the number of rows and columns in the training data
train_data.shape
(20000, 41)
# Checking the number of rows and columns in the test data
test_data.shape
(5000, 41)
# Checking first few records from train data
train_data.head()
| V1 | V2 | V3 | V4 | V5 | V6 | V7 | V8 | V9 | V10 | ... | V32 | V33 | V34 | V35 | V36 | V37 | V38 | V39 | V40 | Target | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | -4.464606 | -4.679129 | 3.101546 | 0.506130 | -0.221083 | -2.032511 | -2.910870 | 0.050714 | -1.522351 | 3.761892 | ... | 3.059700 | -1.690440 | 2.846296 | 2.235198 | 6.667486 | 0.443809 | -2.369169 | 2.950578 | -3.480324 | 0 |
| 1 | 3.365912 | 3.653381 | 0.909671 | -1.367528 | 0.332016 | 2.358938 | 0.732600 | -4.332135 | 0.565695 | -0.101080 | ... | -1.795474 | 3.032780 | -2.467514 | 1.894599 | -2.297780 | -1.731048 | 5.908837 | -0.386345 | 0.616242 | 0 |
| 2 | -3.831843 | -5.824444 | 0.634031 | -2.418815 | -1.773827 | 1.016824 | -2.098941 | -3.173204 | -2.081860 | 5.392621 | ... | -0.257101 | 0.803550 | 4.086219 | 2.292138 | 5.360850 | 0.351993 | 2.940021 | 3.839160 | -4.309402 | 0 |
| 3 | 1.618098 | 1.888342 | 7.046143 | -1.147285 | 0.083080 | -1.529780 | 0.207309 | -2.493629 | 0.344926 | 2.118578 | ... | -3.584425 | -2.577474 | 1.363769 | 0.622714 | 5.550100 | -1.526796 | 0.138853 | 3.101430 | -1.277378 | 0 |
| 4 | -0.111440 | 3.872488 | -3.758361 | -2.982897 | 3.792714 | 0.544960 | 0.205433 | 4.848994 | -1.854920 | -6.220023 | ... | 8.265896 | 6.629213 | -10.068689 | 1.222987 | -3.229763 | 1.686909 | -2.163896 | -3.644622 | 6.510338 | 0 |
5 rows × 41 columns
# Checking last few records from train data
train_data.tail()
| V1 | V2 | V3 | V4 | V5 | V6 | V7 | V8 | V9 | V10 | ... | V32 | V33 | V34 | V35 | V36 | V37 | V38 | V39 | V40 | Target | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 19995 | -2.071318 | -1.088279 | -0.796174 | -3.011720 | -2.287540 | 2.807310 | 0.481428 | 0.105171 | -0.586599 | -2.899398 | ... | -8.273996 | 5.745013 | 0.589014 | -0.649988 | -3.043174 | 2.216461 | 0.608723 | 0.178193 | 2.927755 | 1 |
| 19996 | 2.890264 | 2.483069 | 5.643919 | 0.937053 | -1.380870 | 0.412051 | -1.593386 | -5.762498 | 2.150096 | 0.272302 | ... | -4.159092 | 1.181466 | -0.742412 | 5.368979 | -0.693028 | -1.668971 | 3.659954 | 0.819863 | -1.987265 | 0 |
| 19997 | -3.896979 | -3.942407 | -0.351364 | -2.417462 | 1.107546 | -1.527623 | -3.519882 | 2.054792 | -0.233996 | -0.357687 | ... | 7.112162 | 1.476080 | -3.953710 | 1.855555 | 5.029209 | 2.082588 | -6.409304 | 1.477138 | -0.874148 | 0 |
| 19998 | -3.187322 | -10.051662 | 5.695955 | -4.370053 | -5.354758 | -1.873044 | -3.947210 | 0.679420 | -2.389254 | 5.456756 | ... | 0.402812 | 3.163661 | 3.752095 | 8.529894 | 8.450626 | 0.203958 | -7.129918 | 4.249394 | -6.112267 | 0 |
| 19999 | -2.686903 | 1.961187 | 6.137088 | 2.600133 | 2.657241 | -4.290882 | -2.344267 | 0.974004 | -1.027462 | 0.497421 | ... | 6.620811 | -1.988786 | -1.348901 | 3.951801 | 5.449706 | -0.455411 | -2.202056 | 1.678229 | -1.974413 | 0 |
5 rows × 41 columns
# Checking first few records from test data
test_data.head()
| V1 | V2 | V3 | V4 | V5 | V6 | V7 | V8 | V9 | V10 | ... | V32 | V33 | V34 | V35 | V36 | V37 | V38 | V39 | V40 | Target | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | -0.613489 | -3.819640 | 2.202302 | 1.300420 | -1.184929 | -4.495964 | -1.835817 | 4.722989 | 1.206140 | -0.341909 | ... | 2.291204 | -5.411388 | 0.870073 | 0.574479 | 4.157191 | 1.428093 | -10.511342 | 0.454664 | -1.448363 | 0 |
| 1 | 0.389608 | -0.512341 | 0.527053 | -2.576776 | -1.016766 | 2.235112 | -0.441301 | -4.405744 | -0.332869 | 1.966794 | ... | -2.474936 | 2.493582 | 0.315165 | 2.059288 | 0.683859 | -0.485452 | 5.128350 | 1.720744 | -1.488235 | 0 |
| 2 | -0.874861 | -0.640632 | 4.084202 | -1.590454 | 0.525855 | -1.957592 | -0.695367 | 1.347309 | -1.732348 | 0.466500 | ... | -1.318888 | -2.997464 | 0.459664 | 0.619774 | 5.631504 | 1.323512 | -1.752154 | 1.808302 | 1.675748 | 0 |
| 3 | 0.238384 | 1.458607 | 4.014528 | 2.534478 | 1.196987 | -3.117330 | -0.924035 | 0.269493 | 1.322436 | 0.702345 | ... | 3.517918 | -3.074085 | -0.284220 | 0.954576 | 3.029331 | -1.367198 | -3.412140 | 0.906000 | -2.450889 | 0 |
| 4 | 5.828225 | 2.768260 | -1.234530 | 2.809264 | -1.641648 | -1.406698 | 0.568643 | 0.965043 | 1.918379 | -2.774855 | ... | 1.773841 | -1.501573 | -2.226702 | 4.776830 | -6.559698 | -0.805551 | -0.276007 | -3.858207 | -0.537694 | 0 |
5 rows × 41 columns
# Checking last few records from test data
test_data.tail()
| V1 | V2 | V3 | V4 | V5 | V6 | V7 | V8 | V9 | V10 | ... | V32 | V33 | V34 | V35 | V36 | V37 | V38 | V39 | V40 | Target | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 4995 | -5.120451 | 1.634804 | 1.251259 | 4.035944 | 3.291204 | -2.932230 | -1.328662 | 1.754066 | -2.984586 | 1.248633 | ... | 9.979118 | 0.063438 | 0.217281 | 3.036388 | 2.109323 | -0.557433 | 1.938718 | 0.512674 | -2.694194 | 0 |
| 4996 | -5.172498 | 1.171653 | 1.579105 | 1.219922 | 2.529627 | -0.668648 | -2.618321 | -2.000545 | 0.633791 | -0.578938 | ... | 4.423900 | 2.603811 | -2.152170 | 0.917401 | 2.156586 | 0.466963 | 0.470120 | 2.196756 | -2.376515 | 0 |
| 4997 | -1.114136 | -0.403576 | -1.764875 | -5.879475 | 3.571558 | 3.710802 | -2.482952 | -0.307614 | -0.921945 | -2.999141 | ... | 3.791778 | 7.481506 | -10.061396 | -0.387166 | 1.848509 | 1.818248 | -1.245633 | -1.260876 | 7.474682 | 0 |
| 4998 | -1.703241 | 0.614650 | 6.220503 | -0.104132 | 0.955916 | -3.278706 | -1.633855 | -0.103936 | 1.388152 | -1.065622 | ... | -4.100352 | -5.949325 | 0.550372 | -1.573640 | 6.823936 | 2.139307 | -4.036164 | 3.436051 | 0.579249 | 0 |
| 4999 | -0.603701 | 0.959550 | -0.720995 | 8.229574 | -1.815610 | -2.275547 | -2.574524 | -1.041479 | 4.129645 | -2.731288 | ... | 2.369776 | -1.062408 | 0.790772 | 4.951955 | -7.440825 | -0.069506 | -0.918083 | -2.291154 | -5.362891 | 0 |
5 rows × 41 columns
# checking datatypes of traindata
train_data.dtypes
| 0 | |
|---|---|
| V1 | float64 |
| V2 | float64 |
| V3 | float64 |
| V4 | float64 |
| V5 | float64 |
| V6 | float64 |
| V7 | float64 |
| V8 | float64 |
| V9 | float64 |
| V10 | float64 |
| V11 | float64 |
| V12 | float64 |
| V13 | float64 |
| V14 | float64 |
| V15 | float64 |
| V16 | float64 |
| V17 | float64 |
| V18 | float64 |
| V19 | float64 |
| V20 | float64 |
| V21 | float64 |
| V22 | float64 |
| V23 | float64 |
| V24 | float64 |
| V25 | float64 |
| V26 | float64 |
| V27 | float64 |
| V28 | float64 |
| V29 | float64 |
| V30 | float64 |
| V31 | float64 |
| V32 | float64 |
| V33 | float64 |
| V34 | float64 |
| V35 | float64 |
| V36 | float64 |
| V37 | float64 |
| V38 | float64 |
| V39 | float64 |
| V40 | float64 |
| Target | int64 |
# checking datatypes of testdata
test_data.dtypes
| 0 | |
|---|---|
| V1 | float64 |
| V2 | float64 |
| V3 | float64 |
| V4 | float64 |
| V5 | float64 |
| V6 | float64 |
| V7 | float64 |
| V8 | float64 |
| V9 | float64 |
| V10 | float64 |
| V11 | float64 |
| V12 | float64 |
| V13 | float64 |
| V14 | float64 |
| V15 | float64 |
| V16 | float64 |
| V17 | float64 |
| V18 | float64 |
| V19 | float64 |
| V20 | float64 |
| V21 | float64 |
| V22 | float64 |
| V23 | float64 |
| V24 | float64 |
| V25 | float64 |
| V26 | float64 |
| V27 | float64 |
| V28 | float64 |
| V29 | float64 |
| V30 | float64 |
| V31 | float64 |
| V32 | float64 |
| V33 | float64 |
| V34 | float64 |
| V35 | float64 |
| V36 | float64 |
| V37 | float64 |
| V38 | float64 |
| V39 | float64 |
| V40 | float64 |
| Target | int64 |
# checking missing values
train_data.isnull().sum()
| 0 | |
|---|---|
| V1 | 18 |
| V2 | 18 |
| V3 | 0 |
| V4 | 0 |
| V5 | 0 |
| V6 | 0 |
| V7 | 0 |
| V8 | 0 |
| V9 | 0 |
| V10 | 0 |
| V11 | 0 |
| V12 | 0 |
| V13 | 0 |
| V14 | 0 |
| V15 | 0 |
| V16 | 0 |
| V17 | 0 |
| V18 | 0 |
| V19 | 0 |
| V20 | 0 |
| V21 | 0 |
| V22 | 0 |
| V23 | 0 |
| V24 | 0 |
| V25 | 0 |
| V26 | 0 |
| V27 | 0 |
| V28 | 0 |
| V29 | 0 |
| V30 | 0 |
| V31 | 0 |
| V32 | 0 |
| V33 | 0 |
| V34 | 0 |
| V35 | 0 |
| V36 | 0 |
| V37 | 0 |
| V38 | 0 |
| V39 | 0 |
| V40 | 0 |
| Target | 0 |
# checking missing values
test_data.isnull().sum()
| 0 | |
|---|---|
| V1 | 5 |
| V2 | 6 |
| V3 | 0 |
| V4 | 0 |
| V5 | 0 |
| V6 | 0 |
| V7 | 0 |
| V8 | 0 |
| V9 | 0 |
| V10 | 0 |
| V11 | 0 |
| V12 | 0 |
| V13 | 0 |
| V14 | 0 |
| V15 | 0 |
| V16 | 0 |
| V17 | 0 |
| V18 | 0 |
| V19 | 0 |
| V20 | 0 |
| V21 | 0 |
| V22 | 0 |
| V23 | 0 |
| V24 | 0 |
| V25 | 0 |
| V26 | 0 |
| V27 | 0 |
| V28 | 0 |
| V29 | 0 |
| V30 | 0 |
| V31 | 0 |
| V32 | 0 |
| V33 | 0 |
| V34 | 0 |
| V35 | 0 |
| V36 | 0 |
| V37 | 0 |
| V38 | 0 |
| V39 | 0 |
| V40 | 0 |
| Target | 0 |
# checking duplicate values
train_data.duplicated().sum()
np.int64(0)
# checking duplicate values
test_data.duplicated().sum()
np.int64(0)
# function to create labeled barplots
def labeled_barplot(data, feature, perc=False, n=None):
"""
Barplot with percentage at the top
data: dataframe
feature: dataframe column
perc: whether to display percentages instead of count (default is False)
n: displays the top n category levels (default is None, i.e., display all levels)
"""
total = len(data[feature]) # length of the column
count = data[feature].nunique()
if n is None:
plt.figure(figsize=(count + 1, 5))
else:
plt.figure(figsize=(n + 1, 5))
plt.xticks(rotation=90, fontsize=15)
ax = sns.countplot(
data=data,
x=feature,
hue=feature,  # assigning hue avoids seaborn 0.13's palette-without-hue deprecation warning
palette="Paired",
legend=False,
order=data[feature].value_counts().index[:n].sort_values(),
)
for p in ax.patches:
if perc:
label = "{:.1f}%".format(
100 * p.get_height() / total
) # percentage of each class of the category
else:
label = p.get_height() # count of each level of the category
x = p.get_x() + p.get_width() / 2 # width of the plot
y = p.get_height() # height of the plot
ax.annotate(
label,
(x, y),
ha="center",
va="center",
size=12,
xytext=(0, 5),
textcoords="offset points",
) # annotate the percentage
plt.show() # show the plot
# function to plot a boxplot and a histogram along the same scale.
def histogram_boxplot(data, feature, figsize=(12, 7), kde=False, bins=None):
"""
Boxplot and histogram combined
data: dataframe
feature: dataframe column
figsize: size of figure (default (12,7))
kde: whether to the show density curve (default False)
bins: number of bins for histogram (default None)
"""
f2, (ax_box2, ax_hist2) = plt.subplots(
nrows=2, # Number of rows of the subplot grid= 2
sharex=True, # x-axis will be shared among all subplots
gridspec_kw={"height_ratios": (0.25, 0.75)},
figsize=figsize,
) # creating the 2 subplots
sns.boxplot(
data=data, x=feature, ax=ax_box2, showmeans=True, color="violet"
) # boxplot will be created and a star will indicate the mean value of the column
sns.histplot(
data=data, x=feature, kde=kde, ax=ax_hist2, bins=bins
) if bins else sns.histplot(
data=data, x=feature, kde=kde, ax=ax_hist2
) # For histogram
ax_hist2.axvline(
data[feature].mean(), color="green", linestyle="--"
) # Add mean to the histogram
ax_hist2.axvline(
data[feature].median(), color="black", linestyle="-"
) # Add median to the histogram
train_data['Target'].value_counts(True)
| proportion | |
|---|---|
| Target | |
| 0 | 0.9445 |
| 1 | 0.0555 |
The dataset is highly imbalanced, with failures (Target=1) making up only 5.55% of the data.
Accuracy is a misleading metric here: a model that always predicts "No failure" would be 94.45% accurate yet completely useless for detecting failures.
Recall is critical: since failures (Target = 1) are rare but costly, the model must focus on identifying them correctly. High recall means most actual failures are detected.
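The point about accuracy can be verified with a tiny sketch, using toy labels drawn at the same 5.55% failure rate (not the actual ReneWind data):

```python
import numpy as np
from sklearn.metrics import accuracy_score, recall_score

# Toy target with roughly the same 5.55% failure rate as the training data
rng = np.random.default_rng(42)
y = (rng.random(20_000) < 0.0555).astype(int)

# A naive "model" that always predicts "No failure"
y_pred = np.zeros_like(y)

print("Accuracy:", accuracy_score(y, y_pred))                 # ~0.94: looks good
print("Recall:  ", recall_score(y, y_pred, zero_division=0))  # 0.0: catches no failures
```

The baseline scores ~94% accuracy while detecting zero failures, which is exactly why recall, not accuracy, drives model selection in this problem.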
#histogram_boxplot for all columns
for feature in train_data.columns:
histogram_boxplot(train_data, feature, figsize=(12, 7))
Outliers are present but dispersed in both directions.
There are 18 missing values in each of columns V1 and V2. Since the distributions are approximately normal, the mean and median are nearly equal; hence, missing values can be replaced with the median.
# correlation matrix for train data
plt.figure(figsize=(25, 10))
sns.heatmap(
train_data.corr(), annot=True, vmin=-1, vmax=1, fmt=".2f", cmap="Spectral"
)
plt.show()
V2 and V38 have a positive correlation, and both have a strong negative correlation with V14 (-0.76, -0.85).
V8 and V16 have a strong positive correlation, and both have a strong negative correlation with V9.
V25 and V27 are strongly positively correlated, and both have a strong negative correlation with V32 and V24, which are positively correlated with each other.
V3 and V23 have a strong negative correlation (-0.79).
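Rather than reading pairs off the heatmap by eye, the strongly correlated pairs can be extracted programmatically. A small sketch (demonstrated on a toy frame, since the correlation threshold of 0.75 is an assumption, not a value fixed by the analysis):

```python
import numpy as np
import pandas as pd

def strong_pairs(df, threshold=0.75):
    """Return (col_i, col_j, r) for pairs whose absolute Pearson correlation exceeds the threshold."""
    corr = df.corr()
    cols = corr.columns
    pairs = []
    for i in range(len(cols)):
        for j in range(i + 1, len(cols)):  # upper triangle only, so each pair appears once
            if abs(corr.iloc[i, j]) > threshold:
                pairs.append((cols[i], cols[j], round(corr.iloc[i, j], 2)))
    return pairs

# Toy demonstration: b is (almost) a linear function of a; c is independent noise
rng = np.random.default_rng(0)
a = rng.normal(size=500)
demo = pd.DataFrame({
    "a": a,
    "b": -2 * a + rng.normal(scale=0.1, size=500),
    "c": rng.normal(size=500),
})
print(strong_pairs(demo))  # the (a, b) pair is flagged; c is not
```

Applied to `train_data.drop(columns=["Target"])`, this would list every pair noted above and any others crossing the chosen threshold, which is useful when deciding which redundant features to drop.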
# correlation matrix for test data
plt.figure(figsize=(25, 10))
sns.heatmap(
test_data.corr(), annot=True, vmin=-1, vmax=1, fmt=".2f", cmap="Spectral"
)
plt.show()
There are some feature pairs with strong positive correlations (e.g., V8 and V27: 0.81, V15 and V8: 0.87, etc.). These features may carry redundant information, which can be considered during feature selection (e.g., using PCA or by removing one to reduce multicollinearity).
There are also strong negative correlations (e.g., V13 and V29: -0.85, V2 and V13: -0.85). These suggest potential inverse relationships that may be meaningful for understanding underlying patterns.
# To avoid data leakage, split the training file into train and validation sets.
# separating the independent and dependent variables
X = train_data.drop(["Target"], axis=1)
y = train_data["Target"]
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=1/6, random_state=42, stratify=y)
# Dividing test data into X_test and y_test
X_test = test_data.drop(columns=["Target"])  # removing the target column
y_test = test_data["Target"]  # selecting the target column
# Filling missing values in V1 and V2 with the training-set median
imputer = SimpleImputer(strategy="median")
X_train[["V1", "V2"]] = imputer.fit_transform(X_train[["V1", "V2"]])
#Printing the shape of the dataset.
print("X-Train Data",X_train.shape)
print("y-Train Data",y_train.shape)
print("X-Val Data",X_val.shape)
print("y-Val Data",y_val.shape)
X-Train Data (16666, 40)
y-Train Data (16666,)
X-Val Data (3334, 40)
y-Val Data (3334,)
# Check for NaN or Inf in train and val data
print(np.any(np.isnan(X_train)), np.any(np.isnan(y_train)))
print(np.any(np.isnan(X_val)), np.any(np.isnan(y_val)))
False False
True False
# Replacing remaining NaNs (V1/V2 in X_val) with 0 and converting everything to NumPy arrays
X_train = np.nan_to_num(X_train)
X_val = np.nan_to_num(X_val)
y_train = np.nan_to_num(y_train)
y_val = np.nan_to_num(y_val)
y_test = np.nan_to_num(y_test)
X_test = np.nan_to_num(X_test)
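Note that `np.nan_to_num` fills the validation set's remaining NaNs with 0 rather than a representative value. A leakage-free alternative, sketched below on toy frames (`X_tr`/`X_va` are stand-ins, not the actual split), is to fit the median imputer on the training set only and reuse its learned medians on validation and test:

```python
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer

# Toy frames standing in for X_train / X_val (V1 and V2 have missing values)
X_tr = pd.DataFrame({"V1": [1.0, np.nan, 3.0, 5.0], "V2": [0.0, 1.0, 2.0, 3.0]})
X_va = pd.DataFrame({"V1": [np.nan, 2.0], "V2": [1.0, np.nan]})

imputer = SimpleImputer(strategy="median")
X_tr_f = pd.DataFrame(imputer.fit_transform(X_tr), columns=X_tr.columns)  # fit on train only
X_va_f = pd.DataFrame(imputer.transform(X_va), columns=X_va.columns)      # reuse train medians

print(X_va_f["V1"].iloc[0])  # 3.0 - the training-set median of V1, not 0
```

Because the imputer is fit on the training split alone, no information from validation or test rows leaks into the preprocessing, and missing entries get a typical value instead of 0.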
Model evaluation criterion: recall on the failure class is the primary metric, since a missed failure (false negative) forces a costly replacement, while a false alarm (false positive) only triggers a cheaper inspection. Precision and F1 score are tracked alongside so that false alarms remain manageable.
Utility functions
def plot(history, name):
"""
Function to plot a metric's train/validation curves
history: the History object returned by model.fit, which stores the metrics and losses.
name: the metric to plot, e.g., 'loss', 'recall', or 'precision'
"""
fig, ax = plt.subplots() #Creating a subplot with figure and axes.
plt.plot(history.history[name]) #Plotting the train accuracy or train loss
plt.plot(history.history['val_'+name]) #Plotting the validation accuracy or validation loss
plt.title('Model ' + name.capitalize()) #Defining the title of the plot.
plt.ylabel(name.capitalize()) #Capitalizing the first letter.
plt.xlabel('Epoch') #Defining the label for the x-axis.
fig.legend(['Train', 'Validation'], loc="outside right upper") #Defining the legend, loc controls the position of the legend.
# defining a function to compute different metrics to check the performance of a classification model
def model_performance_classification(
model, predictors, target, threshold=0.5
):
"""
Function to compute different metrics to check classification model performance
model: classifier
predictors: independent variables
target: dependent variable
threshold: threshold for classifying the observation as class 1
"""
# checking which probabilities are greater than threshold
pred = model.predict(predictors) > threshold
acc = accuracy_score(target, pred) # to compute Accuracy
recall = recall_score(target, pred, average='macro') # to compute Recall
precision = precision_score(target, pred, average='macro') # to compute Precision
f1 = f1_score(target, pred, average='macro') # to compute F1-score
# creating a dataframe of metrics
df_perf = pd.DataFrame(
{"Accuracy": acc, "Recall": recall, "Precision": precision, "F1 Score": f1,}, index = [0]
)
return df_perf
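The `threshold` parameter of `model_performance_classification` is the main lever for trading precision against recall: lowering it flags more observations as failures. A sketch of the effect, on toy probabilities rather than actual model outputs:

```python
import numpy as np
from sklearn.metrics import precision_score, recall_score

# Toy predicted probabilities and true labels (illustrative values only)
y_true = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1, 1])
y_prob = np.array([0.05, 0.10, 0.20, 0.35, 0.45, 0.60, 0.30, 0.55, 0.70, 0.90])

for threshold in (0.3, 0.5, 0.7):
    pred = (y_prob > threshold).astype(int)  # same rule as model_performance_classification
    p = precision_score(y_true, pred, zero_division=0)
    r = recall_score(y_true, pred, zero_division=0)
    print(f"threshold={threshold}: precision={p:.2f}, recall={r:.2f}")
```

At the low threshold, three of four failures are caught but half the alarms are false; at the high threshold every alarm is correct but most failures are missed. For this problem the cost asymmetry favors a lower threshold (higher recall).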
# Defining the columns of the dataframe to track hyperparameters and metrics.
columns = ["# hidden layers","# neurons - hidden layer","activation function - hidden layer","# epochs","batch size","optimizer","learning rate, momentum","weight initializer","regularization","train loss","validation loss","train precision","validation precision","train recall","validation recall","time (secs)"]
#Creating a pandas dataframe.
results = pd.DataFrame(columns=columns)
# clears the current Keras session, resetting all layers and models previously created, freeing up memory and resources.
tf.keras.backend.clear_session()
# Initializing the neural network
model_0 = Sequential()
# Defining the input shape explicitly (passing input_dim to Dense is deprecated in Keras 3)
model_0.add(Input(shape=(X_train.shape[1],)))
# Add a hidden layer with 7 neurons and ReLU activation
model_0.add(Dense(7, activation="relu"))
# Add the output layer with sigmoid activation
model_0.add(Dense(1, activation="sigmoid"))
model_0.summary()
Model: "sequential"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓ ┃ Layer (type) ┃ Output Shape ┃ Param # ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩ │ dense (Dense) │ (None, 7) │ 287 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ dense_1 (Dense) │ (None, 1) │ 8 │ └─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 295 (1.15 KB)
Trainable params: 295 (1.15 KB)
Non-trainable params: 0 (0.00 B)
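As a sanity check on the summary above, the parameter counts follow directly from the layer sizes: a Dense layer has inputs × units weights plus one bias per unit.

```python
def dense_params(n_inputs, n_units):
    """Weights plus biases for a fully connected (Dense) layer."""
    return n_inputs * n_units + n_units

hidden = dense_params(40, 7)  # 40 inputs -> 7 neurons
output = dense_params(7, 1)   # 7 neurons -> 1 output
print(hidden, output, hidden + output)  # 287 8 295, matching the summary
```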
optimizer = tf.keras.optimizers.SGD()
model_0.compile(loss="binary_crossentropy", optimizer=optimizer, metrics=["recall","precision"])
# Fitting the model; batch_size equal to the training-set size means full-batch
# gradient descent, i.e., one weight update per epoch.
start = time.time()
history = model_0.fit(X_train, y_train, validation_data=(X_val, y_val), batch_size=X_train.shape[0], epochs=10)
end=time.time()
Epoch 1/10 1/1 ━━━━━━━━━━━━━━━━━━━━ 2s 2s/step - loss: 1.1371 - precision: 0.0448 - recall: 0.3816 - val_loss: 0.9670 - val_precision: 0.0509 - val_recall: 0.3946 Epoch 2/10 1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 558ms/step - loss: 1.0130 - precision: 0.0484 - recall: 0.3805 - val_loss: 0.8658 - val_precision: 0.0559 - val_recall: 0.3946 Epoch 3/10 1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 90ms/step - loss: 0.9107 - precision: 0.0521 - recall: 0.3784 - val_loss: 0.7827 - val_precision: 0.0589 - val_recall: 0.3784 Epoch 4/10 1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 141ms/step - loss: 0.8262 - precision: 0.0563 - recall: 0.3730 - val_loss: 0.7143 - val_precision: 0.0657 - val_recall: 0.3784 Epoch 5/10 1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 136ms/step - loss: 0.7560 - precision: 0.0609 - recall: 0.3654 - val_loss: 0.6578 - val_precision: 0.0718 - val_recall: 0.3784 Epoch 6/10 1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 94ms/step - loss: 0.6978 - precision: 0.0673 - recall: 0.3632 - val_loss: 0.6110 - val_precision: 0.0781 - val_recall: 0.3676 Epoch 7/10 1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 134ms/step - loss: 0.6493 - precision: 0.0730 - recall: 0.3524 - val_loss: 0.5719 - val_precision: 0.0901 - val_recall: 0.3784 Epoch 8/10 1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 88ms/step - loss: 0.6088 - precision: 0.0817 - recall: 0.3514 - val_loss: 0.5391 - val_precision: 0.1040 - val_recall: 0.3838 Epoch 9/10 1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 139ms/step - loss: 0.5746 - precision: 0.0911 - recall: 0.3438 - val_loss: 0.5111 - val_precision: 0.1153 - val_recall: 0.3784 Epoch 10/10 1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 151ms/step - loss: 0.5455 - precision: 0.1028 - recall: 0.3351 - val_loss: 0.4871 - val_precision: 0.1326 - val_recall: 0.3676
print("Time taken in seconds ",end-start)
Time taken in seconds 3.6597535610198975
#Plotting the loss, recall and precision against each iteration
plot(history,'loss')
plot(history,'recall')
plot(history,'precision')
Performance on train and validation sets
model_0_train_perf = model_performance_classification(model_0, X_train, y_train)
model_0_train_perf
521/521 ━━━━━━━━━━━━━━━━━━━━ 1s 1ms/step
| Accuracy | Recall | Precision | F1 Score | |
|---|---|---|---|---|
| 0 | 0.829173 | 0.591073 | 0.537199 | 0.539163 |
model_0_val_perf = model_performance_classification(model_0,X_val,y_val)
model_0_val_perf
105/105 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step
| Accuracy | Recall | Precision | F1 Score | |
|---|---|---|---|---|
| 0 | 0.831434 | 0.613126 | 0.545539 | 0.550353 |
# Recording this model's hyperparameters and metrics in the tracking dataframe
results.loc[0] = [
1,
7,
"relu",
10,
X_train.shape[0],
"sgd",
"[0.01, -]",  # Keras SGD defaults: learning_rate=0.01, momentum=0.0
"xavier",  # Dense layers default to Glorot (Xavier) uniform initialization
"-",
float(history.history["loss"][-1]),
float(history.history["val_loss"][-1]),
float(history.history["precision"][-1]),
float(history.history["val_precision"][-1]),
float(history.history["recall"][-1]),
float(history.history["val_recall"][-1]),
round(float(end - start), 2),
]
results
| # hidden layers | # neurons - hidden layer | activation function - hidden layer | # epochs | batch size | optimizer | learning rate, momentum | weight initializer | regularization | train loss | validation loss | train precision | validation precision | train recall | validation recall | time (secs) | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 1 | 7 | relu | 10 | 16666 | sgd | [0.01, -] | xavier | - | 0.54552 | 0.487089 | 0.102785 | 0.132554 | 0.335135 | 0.367568 | 3.66 |
Model 1: SGD optimizer with 50 epochs
# clears the current Keras session, resetting all layers and models previously created, freeing up memory and resources.
tf.keras.backend.clear_session()
# Initialize the model: a single sigmoid output unit with no hidden layer (i.e., logistic regression)
model_1 = Sequential()
model_1.add(Input(shape=(X_train.shape[1],)))
model_1.add(Dense(1, activation="sigmoid"))
model_1.summary()
Model: "sequential"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓ ┃ Layer (type) ┃ Output Shape ┃ Param # ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩ │ dense (Dense) │ (None, 1) │ 41 │ └─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 41 (164.00 B)
Trainable params: 41 (164.00 B)
Non-trainable params: 0 (0.00 B)
optimizer = tf.keras.optimizers.SGD()
model_1.compile(loss="binary_crossentropy", optimizer=optimizer, metrics=["recall","precision"])
start = time.time()
history = model_1.fit(
X_train, y_train,
validation_data=(X_val, y_val),
batch_size=X_train.shape[0],
epochs=50
)
end = time.time()
Epoch 1/50 1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 1s/step - loss: 1.2738 - precision: 0.0800 - recall: 0.6400 - val_loss: 1.0300 - val_precision: 0.0997 - val_recall: 0.7189 Epoch 2/50 1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 110ms/step - loss: 1.0295 - precision: 0.0917 - recall: 0.6432 - val_loss: 0.8440 - val_precision: 0.1120 - val_recall: 0.7027 Epoch 3/50 1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 127ms/step - loss: 0.8420 - precision: 0.1066 - recall: 0.6486 - val_loss: 0.7032 - val_precision: 0.1249 - val_recall: 0.6757 Epoch 4/50 1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 90ms/step - loss: 0.7000 - precision: 0.1222 - recall: 0.6497 - val_loss: 0.5972 - val_precision: 0.1402 - val_recall: 0.6595 Epoch 5/50 1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 128ms/step - loss: 0.5930 - precision: 0.1380 - recall: 0.6519 - val_loss: 0.5171 - val_precision: 0.1532 - val_recall: 0.6432 Epoch 6/50 1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 98ms/step - loss: 0.5121 - precision: 0.1551 - recall: 0.6497 - val_loss: 0.4562 - val_precision: 0.1706 - val_recall: 0.6324 Epoch 7/50 1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 135ms/step - loss: 0.4507 - precision: 0.1744 - recall: 0.6551 - val_loss: 0.4095 - val_precision: 0.1899 - val_recall: 0.6324 Epoch 8/50 1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 135ms/step - loss: 0.4036 - precision: 0.1957 - recall: 0.6627 - val_loss: 0.3733 - val_precision: 0.2011 - val_recall: 0.6054 Epoch 9/50 1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 92ms/step - loss: 0.3672 - precision: 0.2139 - recall: 0.6605 - val_loss: 0.3449 - val_precision: 0.2143 - val_recall: 0.5838 Epoch 10/50 1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 132ms/step - loss: 0.3387 - precision: 0.2295 - recall: 0.6562 - val_loss: 0.3223 - val_precision: 0.2276 - val_recall: 0.5622 Epoch 11/50 1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 101ms/step - loss: 0.3160 - precision: 0.2454 - recall: 0.6541 - val_loss: 0.3041 - val_precision: 0.2465 - val_recall: 0.5676 Epoch 12/50 1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 90ms/step - loss: 0.2979 - precision: 0.2609 - recall: 0.6573 - val_loss: 0.2893 - val_precision: 0.2617 - val_recall: 0.5730 Epoch 13/50 
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 142ms/step - loss: 0.2831 - precision: 0.2751 - recall: 0.6584 - val_loss: 0.2771 - val_precision: 0.2715 - val_recall: 0.5622 Epoch 14/50 1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 131ms/step - loss: 0.2710 - precision: 0.2892 - recall: 0.6595 - val_loss: 0.2670 - val_precision: 0.2803 - val_recall: 0.5622 Epoch 15/50 1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 91ms/step - loss: 0.2609 - precision: 0.2995 - recall: 0.6541 - val_loss: 0.2584 - val_precision: 0.2873 - val_recall: 0.5622 Epoch 16/50 1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 199ms/step - loss: 0.2524 - precision: 0.3120 - recall: 0.6530 - val_loss: 0.2511 - val_precision: 0.2901 - val_recall: 0.5568 Epoch 17/50 1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 142ms/step - loss: 0.2451 - precision: 0.3216 - recall: 0.6519 - val_loss: 0.2448 - val_precision: 0.3029 - val_recall: 0.5568 Epoch 18/50 1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 311ms/step - loss: 0.2389 - precision: 0.3320 - recall: 0.6508 - val_loss: 0.2393 - val_precision: 0.3201 - val_recall: 0.5676 Epoch 19/50 1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 295ms/step - loss: 0.2335 - precision: 0.3415 - recall: 0.6497 - val_loss: 0.2345 - val_precision: 0.3292 - val_recall: 0.5730 Epoch 20/50 1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 293ms/step - loss: 0.2288 - precision: 0.3513 - recall: 0.6476 - val_loss: 0.2303 - val_precision: 0.3344 - val_recall: 0.5676 Epoch 21/50 1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 164ms/step - loss: 0.2246 - precision: 0.3644 - recall: 0.6508 - val_loss: 0.2266 - val_precision: 0.3432 - val_recall: 0.5622 Epoch 22/50 1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 294ms/step - loss: 0.2209 - precision: 0.3731 - recall: 0.6530 - val_loss: 0.2232 - val_precision: 0.3515 - val_recall: 0.5568 Epoch 23/50 1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 297ms/step - loss: 0.2176 - precision: 0.3824 - recall: 0.6519 - val_loss: 0.2202 - val_precision: 0.3592 - val_recall: 0.5514 Epoch 24/50 1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 269ms/step - loss: 0.2146 - precision: 0.3904 - recall: 0.6530 - val_loss: 0.2175 - val_precision: 0.3692 - val_recall: 0.5568 Epoch 
25/50 1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 173ms/step - loss: 0.2119 - precision: 0.3985 - recall: 0.6519 - val_loss: 0.2150 - val_precision: 0.3804 - val_recall: 0.5676 [epochs 26-49 elided: loss and val_loss decrease steadily from ~0.21 to ~0.18] Epoch 50/50 1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 173ms/step - loss: 0.1808 - precision: 0.4903 - recall: 0.6530 - val_loss: 0.1856 - val_precision: 0.4732 - val_recall: 0.5730
print("Time taken in seconds ",end-start)
Time taken in seconds 8.853889226913452
plot(history,'loss')
plot(history,'precision')
plot(history,'recall')
Let's check the model's performance on the training and validation data.
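model_performance_classification is a helper defined earlier in the notebook; a minimal sketch of what such a helper could look like (the exact implementation may differ) is:

```python
import pandas as pd
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

def model_performance_classification(model, predictors, target, threshold=0.5):
    """Predict probabilities with a Keras model, binarize at `threshold`,
    and return a one-row DataFrame of common classification metrics."""
    pred = (model.predict(predictors) > threshold).astype(int).ravel()
    return pd.DataFrame(
        {
            "Accuracy": accuracy_score(target, pred),
            "Recall": recall_score(target, pred),
            "Precision": precision_score(target, pred),
            "F1 Score": f1_score(target, pred),
        },
        index=[0],
    )
```

The 0.5 threshold here is an assumption; any object exposing a `predict` method returning probabilities works.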
model_1_train_perf = model_performance_classification(model_1, X_train, y_train)
model_1_train_perf
521/521 ━━━━━━━━━━━━━━━━━━━━ 1s 1ms/step
| | Accuracy | Recall | Precision | F1 Score |
|---|---|---|---|---|
| 0 | 0.943178 | 0.806602 | 0.735131 | 0.765091 |
model_1_val_perf = model_performance_classification(model_1, X_val, y_val)
model_1_val_perf
105/105 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step
| | Accuracy | Recall | Precision | F1 Score |
|---|---|---|---|---|
| 0 | 0.940912 | 0.76775 | 0.723906 | 0.743431 |
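The results DataFrame that collects one row per experiment was created earlier in the notebook; a hypothetical initialization consistent with the columns it tracks would be:

```python
import pandas as pd

# Hypothetical set-up for the experiment tracker used throughout the notebook;
# column names match the `results` table displayed after each experiment.
results = pd.DataFrame(
    columns=[
        "# hidden layers", "# neurons - hidden layer",
        "activation function - hidden layer", "# epochs", "batch size",
        "optimizer", "learning rate, momentum", "weight initializer",
        "regularization", "train loss", "validation loss", "train precision",
        "validation precision", "train recall", "validation recall",
        "time (secs)",
    ]
)
```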
results.loc[1] = [
0,
'-',
'-',
50,
16666,
'sgd',
'[0.001, -]',
'xavier',
'-',
float(history.history["loss"][-1]),
float(history.history["val_loss"][-1]),
float(history.history["precision"][-1]),
float(history.history["val_precision"][-1]),
float(history.history["recall"][-1]),
float(history.history["val_recall"][-1]),
round(float(end - start), 2)
]
results
| | # hidden layers | # neurons - hidden layer | activation function - hidden layer | # epochs | batch size | optimizer | learning rate, momentum | weight initializer | regularization | train loss | validation loss | train precision | validation precision | train recall | validation recall | time (secs) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 1 | 7 | - | 10 | 16666 | sgd | [0.001, -] | xavier | - | 0.545520 | 0.487089 | 0.102785 | 0.132554 | 0.335135 | 0.367568 | 3.66 |
| 1 | 0 | - | - | 50 | 16666 | sgd | [0.001, -] | xavier | - | 0.180822 | 0.185569 | 0.490260 | 0.473214 | 0.652973 | 0.572973 | 8.85 |
Both precision and recall improved when the number of epochs was increased from 10 to 50.
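Since more epochs helped here but can eventually overfit, one common option (not used in this run) is an EarlyStopping callback that halts training once the validation loss stops improving; a hedged sketch, with illustrative parameter values:

```python
import tensorflow as tf

# Stop once val_loss has not improved for 5 consecutive epochs, and roll
# back to the best weights seen so far; patience=5 is illustrative.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss",
    patience=5,
    restore_best_weights=True,
)
# Would be passed to training via: model.fit(..., callbacks=[early_stop])
```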
SGD optimizer with batch size 32 and 10 epochs
#Clear session
tf.keras.backend.clear_session()
# Initialize model
model_2 = Sequential()
model_2.add(Dense(1, activation="sigmoid", input_dim=X_train.shape[1]))
model_2.summary()
Model: "sequential"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓ ┃ Layer (type) ┃ Output Shape ┃ Param # ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩ │ dense (Dense) │ (None, 1) │ 41 │ └─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 41 (164.00 B)
Trainable params: 41 (164.00 B)
Non-trainable params: 0 (0.00 B)
optimizer = tf.keras.optimizers.SGD()
model_2.compile(loss="binary_crossentropy", optimizer=optimizer, metrics=["recall","precision"])
start = time.time()
history = model_2.fit(
X_train, y_train,
validation_data=(X_val, y_val),
batch_size=32,
epochs=10
)
end = time.time()
Epoch 1/10 521/521 ━━━━━━━━━━━━━━━━━━━━ 3s 4ms/step - loss: 0.2100 - precision: 0.4274 - recall: 0.6743 - val_loss: 0.1446 - val_precision: 0.6211 - val_recall: 0.6378 Epoch 2/10 521/521 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 0.1403 - precision: 0.5925 - recall: 0.6166 - val_loss: 0.1373 - val_precision: 0.6744 - val_recall: 0.6270 Epoch 3/10 521/521 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - loss: 0.1352 - precision: 0.6621 - recall: 0.6332 - val_loss: 0.1317 - val_precision: 0.6914 - val_recall: 0.6054 Epoch 4/10 521/521 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - loss: 0.1237 - precision: 0.7053 - recall: 0.6371 - val_loss: 0.1301 - val_precision: 0.7266 - val_recall: 0.5459 Epoch 5/10 521/521 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - loss: 0.1253 - precision: 0.7227 - recall: 0.6004 - val_loss: 0.1253 - val_precision: 0.7432 - val_recall: 0.5946 Epoch 6/10 521/521 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - loss: 0.1328 - precision: 0.7284 - recall: 0.5668 - val_loss: 0.1243 - val_precision: 0.7431 - val_recall: 0.5784 Epoch 7/10 521/521 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - loss: 0.1283 - precision: 0.7432 - recall: 0.5833 - val_loss: 0.1229 - val_precision: 0.7000 - val_recall: 0.5676 Epoch 8/10 521/521 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - loss: 0.1221 - precision: 0.7387 - recall: 0.5803 - val_loss: 0.1210 - val_precision: 0.8099 - val_recall: 0.5297 Epoch 9/10 521/521 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - loss: 0.1166 - precision: 0.7764 - recall: 0.5607 - val_loss: 0.1194 - val_precision: 0.7879 - val_recall: 0.5622 Epoch 10/10 521/521 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 0.1168 - precision: 0.7704 - recall: 0.5763 - val_loss: 0.1178 - val_precision: 0.7907 - val_recall: 0.5514
print("Time taken in seconds ",end-start)
Time taken in seconds 16.124881505966187
plot(history,'loss')
plot(history,'precision')
plot(history,'recall')
model_2_train_perf = model_performance_classification(model_2, X_train, y_train)
model_2_train_perf
521/521 ━━━━━━━━━━━━━━━━━━━━ 1s 1ms/step
| | Accuracy | Recall | Precision | F1 Score |
|---|---|---|---|---|
| 0 | 0.966759 | 0.772787 | 0.878737 | 0.81596 |
model_2_val_perf = model_performance_classification(model_2, X_val, y_val)
model_2_val_perf
105/105 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step
| | Accuracy | Recall | Precision | F1 Score |
|---|---|---|---|---|
| 0 | 0.967007 | 0.771389 | 0.8824 | 0.816185 |
results.loc[2] = [
0,
'-',
'-',
10,
32,
'sgd',
'[0.001, -]',
'xavier',
'-',
float(history.history["loss"][-1]),
float(history.history["val_loss"][-1]),
float(history.history["precision"][-1]),
float(history.history["val_precision"][-1]),
float(history.history["recall"][-1]),
float(history.history["val_recall"][-1]),
round(float(end - start), 2)
]
results
| | # hidden layers | # neurons - hidden layer | activation function - hidden layer | # epochs | batch size | optimizer | learning rate, momentum | weight initializer | regularization | train loss | validation loss | train precision | validation precision | train recall | validation recall | time (secs) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 1 | 7 | - | 10 | 16666 | sgd | [0.001, -] | xavier | - | 0.545520 | 0.487089 | 0.102785 | 0.132554 | 0.335135 | 0.367568 | 3.66 |
| 1 | 0 | - | - | 50 | 16666 | sgd | [0.001, -] | xavier | - | 0.180822 | 0.185569 | 0.490260 | 0.473214 | 0.652973 | 0.572973 | 8.85 |
| 2 | 0 | - | - | 10 | 32 | sgd | [0.001, -] | xavier | - | 0.117435 | 0.117788 | 0.782875 | 0.790698 | 0.553514 | 0.551351 | 16.12 |
Keeping batch size 32 and increasing the number of epochs to 50
tf.keras.backend.clear_session()
# Initialize model
model_3 = Sequential()
model_3.add(Dense(1, activation="sigmoid", input_dim=X_train.shape[1]))
model_3.summary()
Model: "sequential"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓ ┃ Layer (type) ┃ Output Shape ┃ Param # ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩ │ dense (Dense) │ (None, 1) │ 41 │ └─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 41 (164.00 B)
Trainable params: 41 (164.00 B)
Non-trainable params: 0 (0.00 B)
optimizer = tf.keras.optimizers.SGD()
model_3.compile(loss="binary_crossentropy", optimizer=optimizer, metrics=["recall","precision"])
start = time.time()
history = model_3.fit(
X_train, y_train,
validation_data=(X_val, y_val),
batch_size=32,
epochs=50
)
end = time.time()
Epoch 1/50 521/521 ━━━━━━━━━━━━━━━━━━━━ 3s 3ms/step - loss: 0.2570 - precision: 0.3662 - recall: 0.6397 - val_loss: 0.1453 - val_precision: 0.6010 - val_recall: 0.6270 [epochs 2-49 elided: loss plateaus near 0.11, precision climbs above 0.80, and recall drifts down toward 0.48-0.52] Epoch 50/50 521/521 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - loss: 0.1143 - precision: 0.8393 - recall: 0.4791 - val_loss: 0.1140 - val_precision: 0.8067 - val_recall: 0.5189
print("Time taken in seconds ",end-start)
Time taken in seconds 87.10637378692627
plot(history,'loss')
plot(history,'precision')
plot(history,'recall')
model_3_train_perf = model_performance_classification(model_3, X_train, y_train)
model_3_train_perf
521/521 ━━━━━━━━━━━━━━━━━━━━ 1s 1ms/step
| | Accuracy | Recall | Precision | F1 Score |
|---|---|---|---|---|
| 0 | 0.967479 | 0.754852 | 0.903771 | 0.810338 |
model_3_val_perf = model_performance_classification(model_3, X_val, y_val)
model_3_val_perf
105/105 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step
| | Accuracy | Recall | Precision | F1 Score |
|---|---|---|---|---|
| 0 | 0.966407 | 0.755808 | 0.88952 | 0.80699 |
results.loc[3] = [
0,
'-',
'-',
50,
32,
'sgd',
'[0.001, -]',
'xavier',
'-',
float(history.history["loss"][-1]),
float(history.history["val_loss"][-1]),
float(history.history["precision"][-1]),
float(history.history["val_precision"][-1]),
float(history.history["recall"][-1]),
float(history.history["val_recall"][-1]),
round(float(end - start), 2)
]
results
| | # hidden layers | # neurons - hidden layer | activation function - hidden layer | # epochs | batch size | optimizer | learning rate, momentum | weight initializer | regularization | train loss | validation loss | train precision | validation precision | train recall | validation recall | time (secs) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 1 | 7 | - | 10 | 16666 | sgd | [0.001, -] | xavier | - | 0.545520 | 0.487089 | 0.102785 | 0.132554 | 0.335135 | 0.367568 | 3.66 |
| 1 | 0 | - | - | 50 | 16666 | sgd | [0.001, -] | xavier | - | 0.180822 | 0.185569 | 0.490260 | 0.473214 | 0.652973 | 0.572973 | 8.85 |
| 2 | 0 | - | - | 10 | 32 | sgd | [0.001, -] | xavier | - | 0.117435 | 0.117788 | 0.782875 | 0.790698 | 0.553514 | 0.551351 | 16.12 |
| 3 | 0 | - | - | 50 | 32 | sgd | [0.001, -] | xavier | - | 0.113648 | 0.113962 | 0.840074 | 0.806723 | 0.494054 | 0.518919 | 87.11 |
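The plain-SGD runs above plateau around 0.80 precision and 0.52 recall even with more epochs. One hedged variation, not tried in this notebook, would be SGD with momentum before switching optimizers entirely; the momentum value below is a conventional illustrative choice:

```python
import tensorflow as tf

# SGD with momentum accumulates a velocity term across steps, which often
# speeds up convergence on flat regions of the loss; 0.9 is illustrative.
optimizer = tf.keras.optimizers.SGD(learning_rate=0.001, momentum=0.9)
```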
Adam optimizer with lr 0.0001 (reducing the learning rate from the Keras default of 0.001)
tf.keras.backend.clear_session()
model_4 = Sequential()
model_4.add(Dense(128, activation="tanh", input_dim=X_train.shape[1]))
model_4.add(Dense(64, activation="tanh"))
model_4.add(Dense(1, activation="sigmoid"))
model_4.summary()
Model: "sequential"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓ ┃ Layer (type) ┃ Output Shape ┃ Param # ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩ │ dense (Dense) │ (None, 128) │ 5,248 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ dense_1 (Dense) │ (None, 64) │ 8,256 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ dense_2 (Dense) │ (None, 1) │ 65 │ └─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 13,569 (53.00 KB)
Trainable params: 13,569 (53.00 KB)
Non-trainable params: 0 (0.00 B)
optimizer = Adam(learning_rate=0.0001)
model_4.compile(loss="binary_crossentropy", optimizer=optimizer, metrics=["precision","recall"])
start = time.time()
history = model_4.fit(
X_train, y_train,
validation_data=(X_val, y_val),
batch_size=32,
epochs=50
)
end = time.time()
Epoch 1/50 521/521 ━━━━━━━━━━━━━━━━━━━━ 4s 4ms/step - loss: 0.3006 - precision: 0.3067 - recall: 0.6125 - val_loss: 0.1167 - val_precision: 0.7118 - val_recall: 0.6541 [epochs 2-49 elided: train loss keeps falling below 0.02 while val_loss bottoms out near 0.045 and then levels off] Epoch 50/50 521/521 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 0.0183 - precision: 0.9912 - recall: 0.9252 - val_loss: 0.0478 - val_precision: 0.9593 - val_recall: 0.8919
print("Time taken in seconds ",end-start)
Time taken in seconds 119.18345022201538
plot(history,'loss')
plot(history,'precision')
plot(history,'recall')
model_4_train_perf = model_performance_classification(model_4, X_train, y_train)
model_4_train_perf
521/521 ━━━━━━━━━━━━━━━━━━━━ 1s 1ms/step
| | Accuracy | Recall | Precision | F1 Score |
|---|---|---|---|---|
| 0 | 0.99532 | 0.959873 | 0.995321 | 0.976854 |
model_4_val_perf = model_performance_classification(model_4, X_val, y_val)
model_4_val_perf
105/105 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step
| | Accuracy | Recall | Precision | F1 Score |
|---|---|---|---|---|
| 0 | 0.991902 | 0.944834 | 0.976489 | 0.960046 |
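Because a missed failure (false negative) costs far more than an unnecessary inspection, the 0.5 decision threshold itself could be tuned in addition to the architecture. A small helper sketch (the function name is illustrative, not from the notebook) for checking the precision/recall trade-off at other thresholds:

```python
import numpy as np
from sklearn.metrics import precision_score, recall_score

def precision_recall_at(y_true, y_prob, threshold):
    """Binarize predicted probabilities at `threshold` and return
    (precision, recall); lower thresholds trade precision for recall."""
    pred = (np.asarray(y_prob) >= threshold).astype(int)
    return precision_score(y_true, pred), recall_score(y_true, pred)
```

Sweeping `threshold` over, say, 0.1 to 0.9 on the validation probabilities would show how much recall can be bought per point of precision given up.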
results.loc[4] = [
2,
'[128,64]',
'[tanh,tanh]',
50,
32,
'Adam',
'[0.0001, -]',
'xavier',
'-',
float(history.history["loss"][-1]),
float(history.history["val_loss"][-1]),
float(history.history["precision"][-1]),
float(history.history["val_precision"][-1]),
float(history.history["recall"][-1]),
float(history.history["val_recall"][-1]),
round(float(end - start), 2)
]
results
| | # hidden layers | # neurons - hidden layer | activation function - hidden layer | # epochs | batch size | optimizer | learning rate, momentum | weight initializer | regularization | train loss | validation loss | train precision | validation precision | train recall | validation recall | time (secs) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 1 | 7 | - | 10 | 16666 | sgd | [0.001, -] | xavier | - | 0.545520 | 0.487089 | 0.102785 | 0.132554 | 0.335135 | 0.367568 | 3.66 |
| 1 | 0 | - | - | 50 | 16666 | sgd | [0.001, -] | xavier | - | 0.180822 | 0.185569 | 0.490260 | 0.473214 | 0.652973 | 0.572973 | 8.85 |
| 2 | 0 | - | - | 10 | 32 | sgd | [0.001, -] | xavier | - | 0.117435 | 0.117788 | 0.782875 | 0.790698 | 0.553514 | 0.551351 | 16.12 |
| 3 | 0 | - | - | 50 | 32 | sgd | [0.001, -] | xavier | - | 0.113648 | 0.113962 | 0.840074 | 0.806723 | 0.494054 | 0.518919 | 87.11 |
| 4 | 2 | [128,64] | [tanh,tanh] | 50 | 32 | Adam | [0.0001, -] | xavier | - | 0.021311 | 0.047761 | 0.990621 | 0.959302 | 0.913514 | 0.891892 | 119.18 |
Adam optimizer with lr 0.0001 and Dropout 0.2
tf.keras.backend.clear_session()
# defining the dropout rate:
# 20% of the first hidden layer's outputs are randomly zeroed during training
dropout_rate = 0.2
model_5 = Sequential()
model_5.add(Dense(128, activation="tanh", input_dim=X_train.shape[1]))
model_5.add(Dropout(dropout_rate))
model_5.add(Dense(64, activation="tanh"))
model_5.add(Dense(1, activation="sigmoid"))
model_5.summary()
Model: "sequential"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓
┃ Layer (type)                    ┃ Output Shape           ┃       Param # ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩
│ dense (Dense)                   │ (None, 128)            │         5,248 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ dropout (Dropout)               │ (None, 128)            │             0 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ dense_1 (Dense)                 │ (None, 64)             │         8,256 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ dense_2 (Dense)                 │ (None, 1)              │            65 │
└─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 13,569 (53.00 KB)
Trainable params: 13,569 (53.00 KB)
Non-trainable params: 0 (0.00 B)
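The parameter counts reported by `model_5.summary()` can be verified by hand, assuming the 40 input features stated in the problem description (each `Dense` layer has `fan_in * units` weights plus `units` biases):

```python
# Sanity check of the parameter counts in the summary above
n_inputs, h1, h2, out = 40, 128, 64, 1

dense_params = n_inputs * h1 + h1   # weights + biases for the first layer
dense_1_params = h1 * h2 + h2       # second hidden layer
dense_2_params = h2 * out + out     # sigmoid output layer
total = dense_params + dense_1_params + dense_2_params

print(dense_params, dense_1_params, dense_2_params, total)  # 5248 8256 65 13569
```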
optimizer = Adam(learning_rate=0.0001)
model_5.compile(loss="binary_crossentropy", optimizer=optimizer, metrics=["precision", "recall"])
start = time.time()
history = model_5.fit(
X_train, y_train,
validation_data=(X_val, y_val),
batch_size=32,
epochs=50
)
end = time.time()
Epoch 1/50
521/521 ━━━━━━━━━━━━━━━━━━━━ 3s 3ms/step - loss: 0.3292 - precision: 0.2445 - recall: 0.5479 - val_loss: 0.1263 - val_precision: 0.6395 - val_recall: 0.5946
Epoch 2/50
521/521 ━━━━━━━━━━━━━━━━━━━━ 2s 4ms/step - loss: 0.1299 - precision: 0.6425 - recall: 0.6381 - val_loss: 0.0949 - val_precision: 0.8200 - val_recall: 0.6649
... [epochs 3-48 omitted: val_loss decreases steadily from 0.0783 to 0.0404] ...
Epoch 49/50
521/521 ━━━━━━━━━━━━━━━━━━━━ 2s 4ms/step - loss: 0.0372 - precision: 0.9851 - recall: 0.8579 - val_loss: 0.0404 - val_precision: 0.9765 - val_recall: 0.8973
Epoch 50/50
521/521 ━━━━━━━━━━━━━━━━━━━━ 2s 4ms/step - loss: 0.0388 - precision: 0.9775 - recall: 0.8700 - val_loss: 0.0405 - val_precision: 0.9762 - val_recall: 0.8865
print("Time taken in seconds ",end-start)
Time taken in seconds 117.47680640220642
plot(history,'loss')
plot(history,'precision')
plot(history,'recall')
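The `plot` helper called above is defined earlier in the notebook and is not shown in this excerpt. A minimal sketch of what such a helper might look like, assuming it takes a Keras `History` object and a metric name and overlays the training and validation curves:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so the sketch runs headless
import matplotlib.pyplot as plt

def plot(history, metric):
    """Plot the training and validation curves of one metric per epoch."""
    plt.figure(figsize=(7, 4))
    plt.plot(history.history[metric], label=f"train {metric}")
    plt.plot(history.history[f"val_{metric}"], label=f"validation {metric}")
    plt.xlabel("epoch")
    plt.ylabel(metric)
    plt.legend()
    plt.show()
```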
model_5_train_perf = model_performance_classification(model_5, X_train, y_train)
model_5_train_perf
521/521 ━━━━━━━━━━━━━━━━━━━━ 1s 1ms/step
| | Accuracy | Recall | Precision | F1 Score |
|---|---|---|---|---|
| 0 | 0.99262 | 0.937584 | 0.991482 | 0.962769 |
model_5_val_perf = model_performance_classification(model_5, X_val, y_val)
model_5_val_perf
105/105 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step
| | Accuracy | Recall | Precision | F1 Score |
|---|---|---|---|---|
| 0 | 0.992501 | 0.942608 | 0.984779 | 0.96261 |
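The `model_performance_classification` helper is also defined earlier in the notebook. A minimal sketch, under the assumption that it thresholds the model's predicted probabilities at 0.5 and reports standard sklearn metrics as a one-row DataFrame (matching the output format above):

```python
import pandas as pd
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

def model_performance_classification(model, X, y, threshold=0.5):
    """Threshold predicted probabilities and report accuracy, recall,
    precision, and F1 score as a one-row DataFrame."""
    pred = (model.predict(X) > threshold).astype(int).ravel()
    return pd.DataFrame(
        {
            "Accuracy": accuracy_score(y, pred),
            "Recall": recall_score(y, pred),
            "Precision": precision_score(y, pred),
            "F1 Score": f1_score(y, pred),
        },
        index=[0],
    )
```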
results.loc[5] = [
2,
'[128,64]',
'[tanh,tanh]',
50,
32,
'Adam',
'[0.0001, -]',
'xavier',
'Dropout=0.2',
float(history.history["loss"][-1]),
float(history.history["val_loss"][-1]),
float(history.history["precision"][-1]),
float(history.history["val_precision"][-1]),
float(history.history["recall"][-1]),
float(history.history["val_recall"][-1]),
round(float(end - start), 2)
]
results
| | # hidden layers | # neurons - hidden layer | activation function - hidden layer | # epochs | batch size | optimizer | learning rate, momentum | weight initializer | regularization | train loss | validation loss | train precision | validation precision | train recall | validation recall | time (secs) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 1 | 7 | - | 10 | 16666 | sgd | [0.001, -] | xavier | - | 0.545520 | 0.487089 | 0.102785 | 0.132554 | 0.335135 | 0.367568 | 3.66 |
| 1 | 0 | - | - | 50 | 16666 | sgd | [0.001, -] | xavier | - | 0.180822 | 0.185569 | 0.490260 | 0.473214 | 0.652973 | 0.572973 | 8.85 |
| 2 | 0 | - | - | 10 | 32 | sgd | [0.001, -] | xavier | - | 0.117435 | 0.117788 | 0.782875 | 0.790698 | 0.553514 | 0.551351 | 16.12 |
| 3 | 0 | - | - | 50 | 32 | sgd | [0.001, -] | xavier | - | 0.113648 | 0.113962 | 0.840074 | 0.806723 | 0.494054 | 0.518919 | 87.11 |
| 4 | 2 | [128,64] | [tanh,tanh] | 50 | 32 | Adam | [0.0001, -] | xavier | - | 0.021311 | 0.047761 | 0.990621 | 0.959302 | 0.913514 | 0.891892 | 119.18 |
| 5 | 2 | [128,64] | [tanh,tanh] | 50 | 32 | Adam | [0.0001, -] | xavier | Dropout=0.2 | 0.037212 | 0.040524 | 0.981618 | 0.976190 | 0.865946 | 0.886486 | 117.48 |
Adam optimizer with lr 0.0001, Dropout 0.2, and He weight initialisation
tf.keras.backend.clear_session()

# Dropout ratio: 20% of the first hidden layer's neurons are switched off
dropout_rate = 0.2

model_6 = Sequential()
# He initialization on the first hidden layer; the remaining layers keep
# the Keras default (Glorot/Xavier)
model_6.add(Dense(128, activation="tanh", input_dim=X_train.shape[1], kernel_initializer="he_normal"))
model_6.add(Dropout(dropout_rate))
model_6.add(Dense(64, activation="tanh"))
model_6.add(Dense(1, activation="sigmoid"))

optimizer = tf.keras.optimizers.Adam(learning_rate=0.0001)
model_6.compile(loss="binary_crossentropy", optimizer=optimizer, metrics=["precision", "recall"])
model_6.summary()
Model: "sequential"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓
┃ Layer (type)                    ┃ Output Shape           ┃       Param # ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩
│ dense (Dense)                   │ (None, 128)            │         5,248 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ dropout (Dropout)               │ (None, 128)            │             0 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ dense_1 (Dense)                 │ (None, 64)             │         8,256 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ dense_2 (Dense)                 │ (None, 1)              │            65 │
└─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 13,569 (53.00 KB)
Trainable params: 13,569 (53.00 KB)
Non-trainable params: 0 (0.00 B)
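For reference, `he_normal` draws initial weights from a truncated normal with standard deviation sqrt(2 / fan_in), while the Glorot/Xavier default uses sqrt(2 / (fan_in + fan_out)), so He initialization gives the first layer a somewhat wider initial spread. (He initialization is usually paired with ReLU activations; these layers use tanh, so the Glorot default is arguably the more natural choice here.) A quick comparison for the first hidden layer, assuming 40 input features and 128 units:

```python
import math

def he_normal_stddev(fan_in):
    # He normal: stddev = sqrt(2 / fan_in)
    return math.sqrt(2.0 / fan_in)

def glorot_normal_stddev(fan_in, fan_out):
    # Glorot/Xavier normal: stddev = sqrt(2 / (fan_in + fan_out))
    return math.sqrt(2.0 / (fan_in + fan_out))

print(round(he_normal_stddev(40), 4))          # first hidden layer, He
print(round(glorot_normal_stddev(40, 128), 4)) # same layer under Glorot
```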
start = time.time()
history = model_6.fit(
X_train, y_train,
validation_data=(X_val, y_val),
batch_size=32,
epochs=50
)
end = time.time()
Epoch 1/50
521/521 ━━━━━━━━━━━━━━━━━━━━ 4s 4ms/step - loss: 0.3596 - precision: 0.2343 - recall: 0.5597 - val_loss: 0.1322 - val_precision: 0.5959 - val_recall: 0.6216
Epoch 2/50
521/521 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 0.1274 - precision: 0.6137 - recall: 0.6598 - val_loss: 0.1013 - val_precision: 0.7469 - val_recall: 0.6541
... [epochs 3-48 omitted: val_loss decreases steadily from 0.0889 to 0.0432] ...
Epoch 49/50
521/521 ━━━━━━━━━━━━━━━━━━━━ 3s 3ms/step - loss: 0.0427 - precision: 0.9747 - recall: 0.8441 - val_loss: 0.0421 - val_precision: 0.9641 - val_recall: 0.8703
Epoch 50/50
521/521 ━━━━━━━━━━━━━━━━━━━━ 2s 3ms/step - loss: 0.0393 - precision: 0.9812 - recall: 0.8320 - val_loss: 0.0421 - val_precision: 0.9583 - val_recall: 0.8703
plot(history,'loss')
plot(history,'precision')
plot(history,'recall')
model_6_train_perf = model_performance_classification(model_6, X_train, y_train)
model_6_train_perf
521/521 ━━━━━━━━━━━━━━━━━━━━ 1s 1ms/step
| | Accuracy | Recall | Precision | F1 Score |
|---|---|---|---|---|
| 0 | 0.99244 | 0.937488 | 0.989672 | 0.961923 |
model_6_val_perf = model_performance_classification(model_6, X_val, y_val)
model_6_val_perf
105/105 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step
| | Accuracy | Recall | Precision | F1 Score |
|---|---|---|---|---|
| 0 | 0.990702 | 0.934024 | 0.975376 | 0.953636 |
results.loc[6] = [
2,
'[128,64]',
'[tanh,tanh]',
50,
32,
'Adam',
'[0.0001, -]',
'[he, xavier]',
'Dropout=0.2',  # matches dropout_rate = 0.2 defined above
float(history.history["loss"][-1]),
float(history.history["val_loss"][-1]),
float(history.history["precision"][-1]),
float(history.history["val_precision"][-1]),
float(history.history["recall"][-1]),
float(history.history["val_recall"][-1]),
round(float(end - start), 2)
]
results
| | # hidden layers | # neurons - hidden layer | activation function - hidden layer | # epochs | batch size | optimizer | learning rate, momentum | weight initializer | regularization | train loss | validation loss | train precision | validation precision | train recall | validation recall | time (secs) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 1 | 7 | - | 10 | 16666 | sgd | [0.001, -] | xavier | - | 0.545520 | 0.487089 | 0.102785 | 0.132554 | 0.335135 | 0.367568 | 3.66 |
| 1 | 0 | - | - | 50 | 16666 | sgd | [0.001, -] | xavier | - | 0.180822 | 0.185569 | 0.490260 | 0.473214 | 0.652973 | 0.572973 | 8.85 |
| 2 | 0 | - | - | 10 | 32 | sgd | [0.001, -] | xavier | - | 0.117435 | 0.117788 | 0.782875 | 0.790698 | 0.553514 | 0.551351 | 16.12 |
| 3 | 0 | - | - | 50 | 32 | sgd | [0.001, -] | xavier | - | 0.113648 | 0.113962 | 0.840074 | 0.806723 | 0.494054 | 0.518919 | 87.11 |
| 4 | 2 | [128,64] | [tanh,tanh] | 50 | 32 | Adam | [0.0001, -] | xavier | - | 0.021311 | 0.047761 | 0.990621 | 0.959302 | 0.913514 | 0.891892 | 119.18 |
| 5 | 2 | [128,64] | [tanh,tanh] | 50 | 32 | Adam | [0.0001, -] | xavier | Dropout=0.2 | 0.037212 | 0.040524 | 0.981618 | 0.976190 | 0.865946 | 0.886486 | 117.48 |
| 6 | 2 | [128,64] | [tanh,tanh] | 50 | 32 | Adam | [0.0001, -] | [he, xavier] | Dropout=0.2 | 0.039129 | 0.042068 | 0.976543 | 0.958333 | 0.855135 | 0.870270 | 117.89 |
Model 5 offers the best balance:
- High recall (catches most failures)
- High precision (avoids excessive false alarms)
- Lowest validation loss (~0.041)
To select the final model, we compare the performance of all the models on the training and validation sets.
Training Performance Comparison
models_train_comp_df = pd.concat(
[
model_0_train_perf.T,
model_1_train_perf.T,
model_2_train_perf.T,
model_3_train_perf.T,
model_4_train_perf.T,
model_5_train_perf.T,
model_6_train_perf.T
],
axis=1,
)
models_train_comp_df.columns = [
"Model 0",
"Model 1",
"Model 2",
"Model 3",
"Model 4",
"Model 5",
"Model 6"
]
print("Training set performance comparison:")
models_train_comp_df
Training set performance comparison:
| | Model 0 | Model 1 | Model 2 | Model 3 | Model 4 | Model 5 | Model 6 |
|---|---|---|---|---|---|---|---|
| Accuracy | 0.829173 | 0.943178 | 0.966759 | 0.967479 | 0.995320 | 0.992620 | 0.992440 |
| Recall | 0.591073 | 0.806602 | 0.772787 | 0.754852 | 0.959873 | 0.937584 | 0.937488 |
| Precision | 0.537199 | 0.735131 | 0.878737 | 0.903771 | 0.995321 | 0.991482 | 0.989672 |
| F1 Score | 0.539163 | 0.765091 | 0.815960 | 0.810338 | 0.976854 | 0.962769 | 0.961923 |
Validation Performance Comparison
models_val_comp_df = pd.concat(
[
model_0_val_perf.T,
model_1_val_perf.T,
model_2_val_perf.T,
model_3_val_perf.T,
model_4_val_perf.T,
model_5_val_perf.T,
model_6_val_perf.T
],
axis=1,
)
models_val_comp_df.columns = [
"Model 0",
"Model 1",
"Model 2",
"Model 3",
"Model 4",
"Model 5",
"Model 6"
]
print("Validation set performance comparison:")
models_val_comp_df
Validation set performance comparison:
| | Model 0 | Model 1 | Model 2 | Model 3 | Model 4 | Model 5 | Model 6 |
|---|---|---|---|---|---|---|---|
| Accuracy | 0.831434 | 0.940912 | 0.967007 | 0.966407 | 0.991902 | 0.992501 | 0.990702 |
| Recall | 0.613126 | 0.767750 | 0.771389 | 0.755808 | 0.944834 | 0.942608 | 0.934024 |
| Precision | 0.545539 | 0.723906 | 0.882400 | 0.889520 | 0.976489 | 0.984779 | 0.975376 |
| F1 Score | 0.550353 | 0.743431 | 0.816185 | 0.806990 | 0.960046 | 0.962610 | 0.953636 |
Now, let's check the performance of the final model on the test set.
best_model = model_5  # Model 5: best validation F1 score and lowest validation loss
# Test set performance for the best model
best_model_test_perf = model_performance_classification(best_model,X_test,y_test)
best_model_test_perf
157/157 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step
| | Accuracy | Recall | Precision | F1 Score |
|---|---|---|---|---|
| 0 | 0.989 | 0.912485 | 0.982302 | 0.944316 |
y_test_pred_best = best_model.predict(X_test)
# Classification report of the best model on the test data (0.5 threshold)
cr_test_best_model = classification_report(y_test, y_test_pred_best > 0.5)
print(cr_test_best_model)
157/157 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step
              precision    recall  f1-score   support

           0       0.99      1.00      0.99      4718
           1       0.97      0.83      0.89       282

    accuracy                           0.99      5000
   macro avg       0.98      0.91      0.94      5000
weighted avg       0.99      0.99      0.99      5000
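Since a missed failure (replacement) costs far more than a false alarm (inspection), the default 0.5 decision threshold need not be optimal: lowering it trades precision for recall. A sketch of a threshold sweep on hypothetical scores (illustrative only, not the model's actual test-set outputs):

```python
import numpy as np
from sklearn.metrics import precision_score, recall_score

# Hypothetical probability scores and labels, for illustration only
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)
scores = np.clip(0.6 * y_true + rng.normal(0.3, 0.2, size=1000), 0, 1)

for threshold in (0.3, 0.5, 0.7):
    pred = (scores > threshold).astype(int)
    print(
        f"threshold={threshold}: "
        f"recall={recall_score(y_true, pred):.3f}, "
        f"precision={precision_score(y_true, pred):.3f}"
    )
```

In production, the threshold would be chosen by minimizing expected cost (inspection, repair, and replacement) on the validation set rather than by eye.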
Business Recommendations
Summary
The final model (Model 5) balances high recall with high precision, making it well suited to cost-effective predictive maintenance: most impending failures are caught while false alarms remain low. By operationalizing this model, ReneWind can substantially reduce replacement costs, improve turbine uptime, and allocate maintenance resources more efficiently, positioning itself as a leader in efficient renewable energy operations.